LLM 0.17 release enables multi-modal input, allowing users to send images, audio, and video files to Large Language Models like GPT-4o, Llama, and Gemini, with a Python API and cost-effective pricing.
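Multi-modal model APIs generally accept images either by URL or as base64-encoded inline data. As a rough illustration of that underlying encoding (this is not the LLM library's own API, just a generic sketch), a file's bytes can be turned into a `data:` URL like this:

```python
import base64

def to_data_url(image_bytes: bytes, mime_type: str = "image/png") -> str:
    """Encode raw image bytes as a base64 data: URL, the format
    many multi-modal model APIs accept for inline images."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return f"data:{mime_type};base64,{encoded}"

# A placeholder payload stands in for real image bytes here.
url = to_data_url(b"\x89PNG fake image bytes")
print(url[:40])
```

The function and its name are illustrative; the LLM library wraps details like this behind its attachment support.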
Simon Willison recently gave a talk at Mastering LLMs: A Conference For Developers & Data Scientists, a six-week online event. The talk centered on his LLM Python command-line utility and its plugins, showing how they can be used to explore Large Language Models, perform a wide range of tasks, and access LLMs directly from the command line.
* **http.server**: Run a localhost web server on port 8000: `python -m http.server`
* **base64**: Encode/decode base64: `python -m base64 -h`
* **asyncio**: Python console with top-level `await`: `python -m asyncio`
* **tokenize**: Debug mode for Python tokenizer: `python -m tokenize cgi.py`
* **ast**: Debug mode for Python AST module: `python -m ast cgi.py`
* **json.tool**: Pretty-print JSON: `echo '{"foo": "bar"}' | python -m json.tool`
* **random**: Runs a benchmarking suite for the random number generator; Python 3.13 replaces this with a proper command-line interface
* **nntplib**: Display latest articles in a newsgroup: `python -m nntplib` (removed in Python 3.13)
* **calendar**: Show a calendar for the current year: `python -m calendar` (with options like `-t html`)
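These `-m` entry points can also be driven programmatically. A minimal sketch using the standard library's `subprocess` module to pretty-print JSON through `json.tool`, equivalent to the shell one-liner above:

```python
import subprocess
import sys

# Run `python -m json.tool` as a subprocess, feeding JSON on stdin.
result = subprocess.run(
    [sys.executable, "-m", "json.tool"],
    input='{"foo": "bar"}',
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```

Using `sys.executable` ensures the subprocess runs under the same Python interpreter as the calling script.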
The author has also automated their weeknotes using an Observable notebook that generates the "releases this week" and "TILs this week" sections: it fetches TILs from the author's Datasette instance, grabs recent releases from GitHub, and assembles a markdown string for the new post.
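The notebook's code isn't shown here, but the assembly step can be sketched in Python. The data-fetching is replaced with hard-coded example lists, and names like `build_weeknotes`, `releases`, and `tils` are purely illustrative:

```python
def build_weeknotes(releases, tils):
    """Assemble the markdown sections of a weeknotes post from
    lists of (title, url) tuples."""
    lines = ["## Releases this week", ""]
    lines += [f"- [{title}]({url})" for title, url in releases]
    lines += ["", "## TILs this week", ""]
    lines += [f"- [{title}]({url})" for title, url in tils]
    return "\n".join(lines)

markdown = build_weeknotes(
    [("llm 0.17", "https://example.com/llm-0.17")],
    [("Running modules with python -m", "https://example.com/til")],
)
print(markdown)
```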
* `llm` CLI tool for running prompts against large language models
* Automation of weeknotes using an Observable notebook
* Notebook generates "releases this week" and "TILs this week" sections
* Tool stores prompts and responses in a SQLite database
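The store-prompts-in-SQLite pattern is easy to reproduce. The sketch below uses a simplified, hypothetical `responses` table in an in-memory database purely to illustrate the idea; LLM's real `logs.db` schema differs:

```python
import sqlite3

# In-memory stand-in for a prompt-logging database.
db = sqlite3.connect(":memory:")
db.execute(
    """CREATE TABLE responses (
        id INTEGER PRIMARY KEY,
        model TEXT,
        prompt TEXT,
        response TEXT
    )"""
)
db.execute(
    "INSERT INTO responses (model, prompt, response) VALUES (?, ?, ?)",
    ("gpt-4o", "Say hello", "Hello!"),
)
db.commit()

# Query the logged prompts and responses back out.
for model, prompt, response in db.execute(
    "SELECT model, prompt, response FROM responses"
):
    print(model, prompt, response)
```

Logging every prompt/response pair this way makes past experiments queryable with plain SQL, and with Datasette, browsable in a web UI.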